Position modeling plays a critical role in Transformers. In this paper, we focus on length extrapolation, i.e., training on short texts while evaluating longer sequences. We define attention resolution as an indicator of extrapolation. Then we propose two designs to improve this metric for Transformers. Specifically, we introduce a relative position embedding to explicitly maximize attention resolution. Moreover, we use blockwise causal attention during inference for better resolution. We evaluate different Transformer variants on language modeling. Experimental results show that our model achieves strong performance in both interpolation and extrapolation settings. The code will be available at https://aka.ms/LeX-Transformer.
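As a rough illustration of blockwise causal attention, here is a minimal NumPy sketch of one plausible masking scheme; the block size, the one-block lookback window, and the function name are assumptions for illustration, not necessarily the paper's exact formulation:

```python
import numpy as np

def blockwise_causal_mask(seq_len: int, block: int) -> np.ndarray:
    """Boolean mask where entry (i, j) is True if query i may attend to
    key j. Sketch of a blockwise causal scheme: attention is causal
    (j <= i) and restricted to the query's own block plus the
    immediately preceding block. The exact windowing is an assumption
    here, not necessarily the paper's formulation.
    """
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (i // block - j // block <= 1)

mask = blockwise_causal_mask(8, block=2)
```

Such a mask keeps every query's attention span bounded at inference time, which is one way a model trained on short sequences can be applied to longer ones.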
Pre-trained models have achieved remarkable success in natural language processing (NLP). However, existing pre-training methods underutilize the benefits of language understanding for generation. Inspired by the idea of Generative Adversarial Networks (GANs), we propose a GAN-style model for encoder-decoder pre-training by introducing an auxiliary discriminator, unifying the abilities of language understanding and generation in a single model. Our model, named GanLM, is trained with two pre-training objectives: replaced token detection and replaced token denoising. Specifically, given masked source sentences, the generator outputs the target distribution and the discriminator predicts whether the tokens sampled from that distribution are incorrect. The target sentence is then replaced with the misclassified tokens to construct a noisy previous context, which is used to generate the gold sentence. Together, both tasks improve the abilities of language understanding and generation by selectively using the denoising data. Extensive experiments on language generation benchmarks show that GanLM, with its powerful language understanding capability, outperforms various strong pre-trained language models (PLMs) and achieves state-of-the-art performance.
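To make the data flow of the two objectives concrete, here is a toy NumPy sketch of how the training signals might be derived; the function and variable names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def build_ganlm_targets(gold, sampled, disc_says_incorrect):
    """Sketch of a GanLM-style data construction (illustrative only):
    - Replaced token detection: a sampled token is truly 'incorrect'
      iff it differs from the gold token.
    - Replaced token denoising: wherever the discriminator misclassified
      a token, keep the sampled (noisy) token in the previous context;
      elsewhere keep the gold token. The generator then learns to
      produce the gold sentence from this noisy context.
    """
    gold = np.asarray(gold)
    sampled = np.asarray(sampled)
    rtd_labels = sampled != gold                           # discriminator targets
    misclassified = rtd_labels != np.asarray(disc_says_incorrect)
    noisy_context = np.where(misclassified, sampled, gold)
    return rtd_labels, noisy_context
```

Tokens the discriminator judges correctly are restored to gold, so the denoising task focuses exactly on the discriminator's mistakes.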
Foundation models have received much attention due to their effectiveness across a wide range of downstream applications. Though there is a big convergence in terms of architecture, most pretrained models are typically still developed for specific tasks or modalities. In this work, we propose to use language models as a general-purpose interface to various foundation models. A collection of pretrained encoders perceive diverse modalities (such as vision and language), and they dock with a language model that plays the role of a universal task layer. We propose a semi-causal language modeling objective to jointly pretrain the interface and the modular encoders. We subsume the advantages and capabilities from both causal and non-causal modeling, thereby combining the best of both worlds. Specifically, the proposed method not only inherits the capabilities of in-context learning and open-ended generation from causal language modeling, but is also conducive to finetuning because of the bidirectional encoders. More importantly, our approach seamlessly unlocks combinations of the above capabilities, e.g., enabling in-context learning or instruction following with finetuned encoders. Experimental results across various language-only and vision-language benchmarks show that our model outperforms or is competitive with specialized models on finetuning, zero-shot generalization, and few-shot learning.
Recent work has seen a surge of knowledge-injection models for pre-trained language models (PTMs). However, most previous studies neglect the PTMs' own ability, i.e., the knowledge stored in their parameters. A recent study observed knowledge neurons in the feed-forward network (FFN), which are responsible for expressing factual knowledge. In this work, we propose a simple model, Kformer, which takes advantage of both the knowledge stored in PTMs and external knowledge, via knowledge injection in the Transformer FFN layers. Empirical results on two knowledge-intensive tasks, commonsense reasoning (i.e., SocialIQA) and medical question answering (i.e., MedQA-USMLE), demonstrate that Kformer can yield better performance than other knowledge-injection techniques such as concatenation- or attention-based injection. We believe the proposed simple model and our empirical findings may be helpful for the community to develop more powerful knowledge-injection methods. Code is available at https://github.com/zjunlp/kformer.
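One way to picture FFN-layer knowledge injection is to append external knowledge vectors to the FFN's projection matrices. The sketch below is an assumed formulation loosely following this idea, not Kformer's exact implementation:

```python
import numpy as np

def ffn_with_knowledge(x, W1, W2, K):
    """Toy sketch of knowledge injection into a Transformer FFN layer
    (an assumed formulation, not the official Kformer code): external
    knowledge vectors K are appended as extra rows ('keys') of the
    first FFN projection and as extra rows ('values') of the second,
    letting the layer mix retrieved knowledge into its output.
    """
    W1k = np.concatenate([W1, K], axis=0)   # (hidden + n_knowledge, d_model)
    W2k = np.concatenate([W2, K], axis=0)   # reuse K as values for simplicity
    h = np.maximum(x @ W1k.T, 0.0)          # ReLU activation over expanded keys
    return h @ W2k
```

A useful sanity check of the design: with all-zero knowledge vectors, the layer reduces exactly to the plain FFN, so injection only adds information on top of what the parameters already store.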
We develop new statistics for robustly identifying corrupted keypoint matches in the structure-from-motion pipeline. The statistics are based on consistency constraints that arise within the clustered structure of the graph of keypoint matches, and are designed to give smaller values to corrupted matches than to uncorrupted ones. These new statistics are combined with an iterative reweighting scheme to filter keypoints, which can then be fed into any standard structure-from-motion pipeline. The filtering method can be implemented efficiently and scaled to massive datasets, as it only requires sparse matrix multiplications. We demonstrate the efficacy of this method on synthetic and real structure-from-motion datasets and show that it achieves state-of-the-art accuracy and speed on these tasks.
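The reweighting loop can be sketched with matrix products alone. The consistency statistic below (a weighted two-path count restricted to existing match edges) is an illustrative stand-in for the paper's statistics, not the actual ones:

```python
import numpy as np

def iterative_match_filter(A: np.ndarray, n_iter: int = 5) -> np.ndarray:
    """Illustrative sketch of consistency-based match filtering. A is
    the (symmetric, 0/1) keypoint-match graph. Each iteration scores
    every match edge by a consistency statistic -- here, the weighted
    count of two-paths between its endpoints, restricted to existing
    edges -- and uses the normalized scores as new weights, so
    inconsistent edges decay toward zero. Dense arrays are used for
    brevity; at scale these would be sparse matrix multiplications.
    """
    W = A.astype(float)
    for _ in range(n_iter):
        S = (W @ W) * A           # consistency statistic on edge support
        W = S / (S.max() or 1.0)  # reweight; low-consistency edges shrink
    return W
```

On a toy graph, an edge that closes triangles with its neighbors keeps a high weight, while a match with no supporting paths decays to zero and can be thresholded away before bundle adjustment.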
This report describes Microsoft's machine translation systems for the WMT21 shared task on large-scale multilingual machine translation. We participated in all three evaluation tracks, including the Large Track and two Small Tracks, where the former is unconstrained and the latter two are fully constrained. Our submissions to the shared task were initialized with DeltaLM (https://aka.ms/deltalm), a generic pre-trained multilingual encoder-decoder model, and correspondingly fine-tuned with the massive collected parallel data and allowed data sources according to the track settings, together with progressive learning and iterative back-translation to further improve performance. Our final submissions ranked first on all three tracks in terms of the automatic evaluation metric.
Sparse Mixture of Experts (MoE) has received great interest due to its promising scaling capability with affordable computational overhead. MoE converts dense layers into sparse experts and utilizes a gated routing network to activate experts conditionally. However, as the number of experts grows, MoE with its outrageous parameter count suffers from overfitting and sparse data allocation. Such problems are especially severe on tasks with limited data, hindering MoE models from improving performance by scaling up. In this work, we propose Mixture of Expert Clusters (MoEC), a general approach that enables expert layers to learn more diverse and appropriate knowledge by imposing variance-based constraints on the routing stage. We further propose a cluster-level expert dropout strategy specifically designed for the expert-cluster structure. Our experiments show that MoEC improves performance on machine translation and natural language understanding tasks, and raises the performance upper bound for scaling up experts under limited data. We also verify that MoEC plays a positive role in mitigating overfitting and sparse data allocation.
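To give a feel for what a variance-based routing constraint can look like, here is a toy formulation that penalizes imbalance in per-expert load; this is an assumed illustration, not MoEC's actual loss:

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def load_variance_penalty(router_logits: np.ndarray) -> float:
    """Toy variance-based routing constraint (assumed formulation):
    penalize the variance of the per-expert load, i.e. the mean routing
    probability each expert receives over a batch of tokens. A high
    penalty means tokens collapse onto a few experts; minimizing it
    pushes routing toward a more diverse allocation.
    """
    probs = softmax(router_logits, axis=-1)   # (n_tokens, n_experts)
    load = probs.mean(axis=0)                 # per-expert load
    return float(np.var(load))
```

Adding such a term to the training loss discourages the routing collapse that makes data allocation sparse when the expert count grows.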
The poor performance of the original BERT for sentence semantic similarity has been widely discussed in previous works. We find that the unsatisfactory performance is mainly due to static token embedding biases and ineffective BERT layers, rather than the high cosine similarity of the sentence embeddings. To this end, we propose a prompt-based sentence embedding method that can reduce token embedding biases and make the original BERT layers more effective. By reformulating the sentence embedding task as a fill-in-the-blank problem, our method significantly improves the performance of the original BERT. We discuss two prompt representing methods and three prompt searching methods for prompt-based sentence embeddings. Moreover, we propose a novel unsupervised training objective based on template denoising, which substantially narrows the performance gap between the supervised and unsupervised settings. For experiments, we evaluate our method in both non-fine-tuned and fine-tuned settings. Even the non-fine-tuned method can outperform fine-tuned methods such as unsupervised ConSERT on STS tasks. Our fine-tuned method outperforms the state-of-the-art method SimCSE in both unsupervised and supervised settings. Compared to SimCSE, we achieve improvements of 2.29 and 2.58 points for BERT and RoBERTa respectively under the unsupervised setting.
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
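As a concrete picture of the NAIVEATTACK variant, here is a minimal sketch of stamping a trigger into raw images before distillation begins; the patch size, location, and pixel value are illustrative assumptions:

```python
import numpy as np

def add_trigger(images: np.ndarray, size: int = 3, value: float = 1.0) -> np.ndarray:
    """Stamp a small square trigger into the bottom-right corner of each
    image in a (N, H, W) batch. In the NAIVEATTACK setting this is done
    once on the raw data at the start of distillation; DOORPING would
    instead keep updating the trigger throughout the distillation loop.
    """
    out = images.copy()
    out[:, -size:, -size:] = value
    return out
```

Because the trigger enters before distillation, it is baked into the synthetic dataset itself, so every model later trained on that dataset inherits the backdoor.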